75 research outputs found
Nonparametric Feature Extraction from Dendrograms
We propose feature extraction from dendrograms in a nonparametric way. The
commonly used Minimax distance measures correspond to building a dendrogram
with the single linkage criterion and defining specific forms of a level
function and a distance function over it. We therefore extend this method to
arbitrary dendrograms. We develop a generalized framework wherein different distance
dendrograms. We develop a generalized framework wherein different distance
measures can be inferred from different types of dendrograms, level functions
and distance functions. Via an appropriate embedding, we compute a vector-based
representation of the inferred distances, in order to enable many numerical
machine learning algorithms to employ such distances. Then, to address the
model selection problem, we study the aggregation of different dendrogram-based
distances respectively in solution space and in representation space in the
spirit of deep representations. In the first approach, for example for the
clustering problem, we build a graph with positive and negative edge weights
according to the consistency of the clustering labels of different objects
among different solutions, in the context of ensemble methods. Then, we use an
efficient variant of correlation clustering to produce the final clusters. In
the second approach, we investigate the sequential combination of different
distances and features in the spirit of multi-layered
architectures to obtain the final features. Finally, we demonstrate the
effectiveness of our approach via several numerical studies.
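As a concrete illustration of the connection between Minimax distances and single-linkage dendrograms used above: pairwise Minimax distances can be read off a Kruskal-style pass over edges in increasing weight order, since two objects' Minimax distance is the weight of the edge that first joins their components. The following is a minimal sketch; function and variable names are illustrative, not from the paper.

```python
import numpy as np

def minimax_distances(d):
    """Pairwise Minimax (bottleneck path) distances for a symmetric base
    distance matrix d: for each pair, the minimum over all connecting paths
    of the maximum edge weight along the path. Computed Kruskal-style:
    when an edge first joins two components, its weight is the Minimax
    distance between every cross-component pair."""
    n = d.shape[0]
    parent = list(range(n))

    def find(i):
        # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    members = {i: [i] for i in range(n)}
    mm = np.zeros((n, n))
    edges = sorted((d[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            # every pair split across the two components gets distance w
            for a in members[ri]:
                for b in members[rj]:
                    mm[a, b] = mm[b, a] = w
            parent[rj] = ri
            members[ri] += members.pop(rj)
    return mm
```

On three points on a line at positions 0, 1, and 10, the Minimax distance between the endpoints is 9 (via the middle point), not the base distance 10.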
Unsupervised Representation Learning with Minimax Distance Measures
We investigate the use of Minimax distances to extract in a nonparametric way
the features that capture the unknown underlying patterns and structures in the
data. We develop a general-purpose and computationally efficient framework to
employ Minimax distances with many machine learning methods that perform on
numerical data. We study both computing the pairwise Minimax distances for all
pairs of objects and computing the Minimax distances of all the objects
to/from a fixed (test) object.
We first efficiently compute the pairwise Minimax distances between the
objects, using the equivalence of Minimax distances over a graph and over a
minimum spanning tree constructed on it. Then, we perform an embedding of the
pairwise Minimax distances into a new vector space, such that their squared
Euclidean distances in the new space equal the pairwise Minimax distances in
the original space. We also study the case of having multiple pairwise Minimax
matrices, instead of a single one. For this case, we propose an embedding that
first sums up the centered matrices and then performs an eigenvalue
decomposition to obtain the relevant features.
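The embedding step described above follows the classical multidimensional-scaling recipe: treat each Minimax matrix as a matrix of squared Euclidean distances, double-center it, sum the centered matrices, and take the top eigenvectors. Here is a hedged sketch under those assumptions; names are illustrative.

```python
import numpy as np

def minimax_embedding(dist_matrices, dim=2):
    """Embed one or more matrices of (squared) distances into a vector
    space whose squared Euclidean distances reproduce them. Double-center
    each matrix, sum the centered matrices, and eigendecompose the sum,
    keeping the top `dim` nonnegative eigenvalues."""
    n = dist_matrices[0].shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = sum(-0.5 * J @ D @ J for D in dist_matrices)
    vals, vecs = np.linalg.eigh(B)               # ascending eigenvalues
    order = np.argsort(vals)[::-1][:dim]         # take the largest
    vals, vecs = vals[order], vecs[:, order]
    vals = np.clip(vals, 0.0, None)              # drop tiny negatives
    return vecs * np.sqrt(vals)
```

For an exact squared-Euclidean input, the embedding reproduces the original distances; for Minimax matrices the same machinery applies because they form an ultrametric.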
Next, we study computing Minimax distances from a fixed (test) object, which
can be used, for instance, in K-nearest neighbor search. Similar to the case
of computing all pairwise Minimax distances, we develop an efficient and
general-purpose algorithm that is applicable with any base distance
measure. Moreover, we investigate in detail the edges selected by the Minimax
distances and thereby explore the ability of Minimax distances in detecting
outlier objects.
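One way to realize the test-object setting efficiently: the MST of the graph augmented with the test object uses only the original MST edges plus edges incident to the test object, so a Prim/Dijkstra-style pass over that small edge set, with a path's cost being its largest edge, yields all Minimax distances from the test object. The sketch below assumes this construction; it is not necessarily the paper's exact algorithm.

```python
import heapq

def minimax_from_test(mst_edges, d_test, n):
    """Minimax distances from a test object to all n objects.
    mst_edges: (i, j, w) edges of the original objects' MST.
    d_test: base distances from the test object to each of the n objects.
    Runs a Dijkstra-like search where a path's cost is its maximum edge."""
    adj = [[] for _ in range(n + 1)]      # node n is the test object
    for i, j, w in mst_edges:
        adj[i].append((j, w)); adj[j].append((i, w))
    for i in range(n):
        adj[n].append((i, d_test[i])); adj[i].append((n, d_test[i]))
    dist = [float("inf")] * (n + 1)
    dist[n] = 0.0
    heap = [(0.0, n)]
    while heap:
        dcur, u = heapq.heappop(heap)
        if dcur > dist[u]:
            continue                      # stale heap entry
        for v, w in adj[u]:
            cand = max(dcur, w)           # bottleneck (max-edge) path cost
            if cand < dist[v]:
                dist[v] = cand
                heapq.heappush(heap, (cand, v))
    return dist[:n]
```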
Finally, for each setting, we perform several experiments to demonstrate the
effectiveness of our framework.
Modeling Transitivity in Complex Networks
An important source of high clustering coefficient in real-world networks is
transitivity. However, existing approaches for modeling transitivity suffer
from at least one of the following problems: i) they produce graphs from a
specific class like bipartite graphs, ii) they do not give an analytical
argument for the high clustering coefficient of the model, and iii) their
clustering coefficient is still significantly lower than real-world networks.
In this paper, we propose a new model for complex networks which is based on
adding transitivity to scale-free models. We theoretically analyze the model
and provide analytical arguments for its different properties. In particular,
we calculate a lower bound on the clustering coefficient of the model which is
independent of the network size, as seen in real-world networks. Beyond the
theoretical analysis, the main properties of the model are evaluated
empirically, and it is shown that the model can precisely simulate real-world
networks from different domains with different specifications.
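The abstract does not specify the model's construction. Purely as an illustration of the general idea of adding transitivity to a preferential-attachment process, here is a toy sketch in the spirit of triad formation, together with the average local clustering coefficient it is meant to raise; this is a generic sketch, not the paper's model.

```python
import random
from itertools import combinations

def pa_with_triads(n, m=2, p=0.8, seed=0):
    """Toy preferential-attachment growth: each new node makes m links;
    with probability p a link closes a triangle with a neighbor of the
    previous target (triad formation), otherwise it attaches
    degree-proportionally. Illustrative only."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(m + 1)}
    for i, j in combinations(range(m + 1), 2):   # small seed clique
        adj[i].add(j); adj[j].add(i)
    targets = [v for v in adj for _ in adj[v]]   # degree-weighted pool
    for new in range(m + 1, n):
        adj[new] = set()
        last = None
        for _ in range(m):
            if last is not None and rng.random() < p:
                cands = list(adj[last] - adj[new] - {new})
                t = rng.choice(cands) if cands else rng.choice(targets)
            else:
                t = rng.choice(targets)
            if t != new and t not in adj[new]:
                adj[new].add(t); adj[t].add(new)
                targets += [new, t]
                last = t
    return adj

def clustering_coefficient(adj):
    """Average local clustering coefficient of an undirected graph."""
    total = 0.0
    for v, nb in adj.items():
        k = len(nb)
        if k < 2:
            continue
        links = sum(1 for a, b in combinations(nb, 2) if b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)
```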
Trip Prediction by Leveraging Trip Histories from Neighboring Users
We propose a novel approach for trip prediction by analyzing users' trip
histories. We augment users' (self-) trip histories by adding 'similar' trips
from other users, which could be informative and useful for predicting future
trips for a given user. This also helps to cope with noisy or sparse trip
histories, where the self-history by itself does not provide a reliable
prediction of future trips. We show empirical evidence that enriching the
users' trip histories with additional trips can reduce the prediction
error by 15%-40%, evaluated on multiple subsets of the Nancy2012 dataset. This
real-world dataset is collected from public transportation ticket validations
in the city of Nancy, France. Our prediction tool is a central component of a
trip simulator system designed to analyze the functionality of public
transportation in the city of Nancy.
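A minimal sketch of the augmentation idea, assuming trips are represented as discrete tokens and user similarity is plain overlap of trip multisets; the paper's actual similarity measure and predictor are not specified here, so the names and choices below are illustrative.

```python
from collections import Counter

def augment_history(user_trips, other_users, k=2):
    """Augment a user's trip history with the histories of the k most
    similar other users. Similarity here is the multiset-overlap count of
    trips; a stand-in for whatever measure the real system uses."""
    def overlap(a, b):
        ca, cb = Counter(a), Counter(b)
        return sum((ca & cb).values())
    ranked = sorted(other_users, key=lambda h: overlap(user_trips, h),
                    reverse=True)
    augmented = list(user_trips)
    for h in ranked[:k]:
        augmented.extend(h)
    return augmented

def predict_next(history):
    """Baseline predictor: the most frequent trip in the history."""
    return Counter(history).most_common(1)[0][0]
```

With a sparse self-history, the augmented history can change the prediction toward trips that similar users take often.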
Efficient Optimization of Dominant Set Clustering with Frank-Wolfe Algorithms
We study Frank-Wolfe algorithms -- standard, pairwise, and away-steps -- for
efficient optimization of Dominant Set Clustering. We present a unified and
computationally efficient framework to employ the different variants of
Frank-Wolfe methods, and we investigate its effectiveness via several
experimental studies. In addition, we provide explicit convergence rates for
the algorithms in terms of the so-called Frank-Wolfe gap. The theoretical
analysis has been specialized to the problem of Dominant Set Clustering and is
thus more easily accessible than in prior work.
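For orientation, the standard Frank-Wolfe variant on this problem can be sketched as follows: maximize the dominant-set objective x^T A x over the probability simplex, where the linear maximization oracle picks the coordinate with the largest gradient and the Frank-Wolfe gap serves as the stopping criterion. This is a generic sketch of the standard variant only (the pairwise and away-steps variants add extra search directions), not the paper's implementation.

```python
import numpy as np

def frank_wolfe_dominant_set(A, iters=500):
    """Standard Frank-Wolfe for max_x x^T A x over the simplex, with A a
    symmetric nonnegative affinity matrix (zero diagonal). Uses the
    classic 2/(t+2) step size; the Frank-Wolfe gap bounds suboptimality."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)              # start at the simplex barycenter
    for t in range(iters):
        grad = 2.0 * A @ x
        i = int(np.argmax(grad))         # linear maximization oracle: a vertex
        gap = grad[i] - grad @ x         # Frank-Wolfe gap
        if gap < 1e-8:
            break
        gamma = 2.0 / (t + 2.0)
        x = (1.0 - gamma) * x            # convex step toward the vertex e_i
        x[i] += gamma
    return x
```

On an affinity matrix with one strongly connected pair and a weakly attached third node, the iterate concentrates its mass on the dominant pair.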
- …